The cases were filed Wednesday in federal court in San Francisco against OpenAI and its chief executive officer, Sam Altman.
According to the lawsuits, OpenAI knew from Jesse Van Rootselaar’s ChatGPT use that she was planning the February massacre at Tumbler Ridge Secondary School, where she was identified as the chief suspect, but made a “conscious decision not to warn authorities.”
“ChatGPT played a role in the mass shooting and OpenAI could have, and should have, prevented it,” according to the complaints, which allege the startup wanted to avoid having to contact police each time OpenAI’s safety team spotted a ChatGPT user planning to carry out a violent act.
“The events in Tumbler Ridge are a tragedy. We have a zero-tolerance policy for using our tools to assist in committing violence,” OpenAI said in a statement. “As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators.”
A series of suits has been filed against chatbot makers since 2024, most of them targeting OpenAI and ChatGPT. Most allege that extensive use of the technology has inflicted a range of harms on children and adults alike.
On Feb. 10, Van Rootselaar allegedly carried out the mass shooting in northeastern British Columbia, killing eight people: her mother, her stepbrother, and six others at the school, five of them children. More than two dozen others were injured. Van Rootselaar, 18, was found dead after the shooting from what appeared to be a self-inflicted wound.
In the wake of the shooting, OpenAI said it had banned Van Rootselaar last June for violating its ChatGPT usage policy. Her account was flagged at the time for messages deemed to have potential for violence, but OpenAI did not alert police. The Wall Street Journal first reported on OpenAI’s decision, saying concerned employees had urged the startup to report the situation to authorities.
Later in February, OpenAI revealed that the suspected killer had created a second ChatGPT account, which the company did not spot until police released her name.
Last week, Altman wrote in a letter published by Tumbler RidgeLines, a local news site, that he wanted to express his “deepest condolences to the entire community.”
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”
The lawsuits come at a sensitive time for OpenAI, which is eyeing a much-anticipated public offering.
OpenAI is also trying to fend off other legal claims.
To contact the editors responsible for this story: Peter Blumberg, Ben Bain
© 2026 Bloomberg L.P. All rights reserved. Used with permission.
